
    An Empirical Analysis about Population, Technological Progress, and Economic Growth in Taiwan

    This paper empirically analyzed the relationship between population, technological progress, and economic growth in Taiwan from 1954 to 2005, using the LA-VAR (lag-augmented vector autoregression) model. The empirical results reveal a major structural change in the economic development of Taiwan after 2000.
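
    For orientation, the lag-augmented VAR idea (often associated with Toda and Yamamoto) is to estimate a VAR in levels with d_max extra lags beyond the chosen order p and then Wald-test only the first p lag coefficients, so the causality test remains valid even for possibly nonstationary series. The sketch below is a minimal illustration on synthetic data; the series names, lag orders, and data-generating process are illustrative assumptions, not the paper's actual Taiwanese data.

    ```python
    # Minimal sketch of a lag-augmented VAR (Toda-Yamamoto style) Granger-causality
    # test on synthetic data. Variable roles, lag orders, and the data-generating
    # process are illustrative assumptions, not the paper's series.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    T = 52                                    # roughly annual data, 1954-2005
    x = np.cumsum(rng.normal(size=T))         # "population" proxy, I(1)
    y = np.cumsum(rng.normal(size=T)) + 0.5 * np.roll(x, 1)   # "GDP" proxy

    def la_var_granger(y, x, p=2, d_max=1):
        """Wald test of 'x does not Granger-cause y' in a single equation of a
        VAR(p + d_max) estimated in levels; only the first p lags of x are tested."""
        k = p + d_max
        Y = y[k:]
        # Regressors: constant, lags 1..k of y, lags 1..k of x
        X = np.column_stack(
            [np.ones(len(Y))]
            + [y[k - i:-i] for i in range(1, k + 1)]
            + [x[k - i:-i] for i in range(1, k + 1)]
        )
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ beta
        sigma2 = resid @ resid / (len(Y) - X.shape[1])
        cov = sigma2 * np.linalg.inv(X.T @ X)
        idx = np.arange(1 + k, 1 + k + p)      # first p lag coefficients of x
        b = beta[idx]
        W = b @ np.linalg.solve(cov[np.ix_(idx, idx)], b)   # Wald statistic
        return W, 1 - stats.chi2.cdf(W, df=p)

    wald, pval = la_var_granger(y, x)
    print(f"Wald = {wald:.2f}, p-value = {pval:.3f}")
    ```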

    On the Impossibility of General Parallel Fast-Forwarding of Hamiltonian Simulation

    Hamiltonian simulation is one of the most important problems in the field of quantum computing. There have been extensive efforts on designing algorithms for faster simulation, and the evolution time T for the simulation greatly affects the algorithm runtime, as expected. While some specific types of Hamiltonians can be fast-forwarded, i.e., simulated within time o(T), for some large classes of Hamiltonians (e.g., all local/sparse Hamiltonians), existing simulation algorithms require running time at least linear in the evolution time T. On the other hand, while there exist lower bounds of Ω(T) circuit size for some large classes of Hamiltonians, these lower bounds do not rule out the possibility of Hamiltonian simulation with large but "low-depth" circuits by running things in parallel. As a result, physical systems with system size scaling with T can potentially perform a fast-forwarded simulation. It is therefore intriguing to ask whether we can achieve fast Hamiltonian simulation with the power of parallelism. In this work, we give a negative answer to the above open problem in various settings. In the oracle model, we prove that there are time-independent sparse Hamiltonians that cannot be simulated via an oracle circuit of depth o(T). In the plain model, relying on the random oracle heuristic, we show that there exist time-independent local Hamiltonians and time-dependent geometrically local Hamiltonians on n qubits that cannot be simulated via an oracle circuit of depth o(T/n^c), where c is a constant. Lastly, we generalize the above results and show that any simulator that is itself a geometrically local Hamiltonian cannot perform the simulation much faster than parallel quantum algorithms.
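
    As a point of reference for the depth-versus-time question, the generic product-formula route already exhibits the behavior that the lower bounds formalize: for a fixed target error, the number of sequential Trotter steps (a proxy for circuit depth) grows with the evolution time T. The sketch below uses a hypothetical 2-qubit Hamiltonian and is not the paper's construction or its lower-bound argument; it only makes that generic scaling concrete.

    ```python
    # Minimal sketch: first-order Trotterization of a 2-qubit Hamiltonian H = A + B.
    # The number of sequential Trotter steps r needed to keep the simulation error
    # below a fixed tolerance grows with the evolution time T, i.e., generic
    # product-formula circuits get deeper as T increases.
    import numpy as np
    from scipy.linalg import expm

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    A = np.kron(Z, Z)                                        # ZZ coupling
    B = np.kron(X, np.eye(2)) + np.kron(np.eye(2), X)        # transverse field
    H = A + B

    def trotter_error(T, r):
        """Spectral-norm error of r first-order Trotter steps vs. exact evolution."""
        exact = expm(-1j * H * T)
        step = expm(-1j * A * T / r) @ expm(-1j * B * T / r)
        return np.linalg.norm(np.linalg.matrix_power(step, r) - exact, 2)

    for T in (1, 2, 4, 8):
        r = 1
        while trotter_error(T, r) > 1e-3:
            r *= 2
        print(f"T = {T}: ~{r} sequential Trotter steps needed")
    ```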

    Is deck B a disadvantageous deck in the Iowa Gambling Task?

    BACKGROUND: The Iowa Gambling Task (IGT) is a popular test for examining monetary decision behavior under uncertainty. The review article by Dunn et al. revealed the difficult-to-explain phenomenon of "prominent deck B", namely that normal decision makers prefer the bad final-outcome deck B to the good final-outcome decks C or D. This phenomenon was demonstrated especially clearly by Wilder et al. and Toplak et al. The "prominent deck B" phenomenon is inconsistent with the basic assumption of the IGT; however, most IGT-related studies presented the "summation" of the bad decks A and B in their data, thereby avoiding the problems associated with deck B. METHODS: To verify the "prominent deck B" phenomenon, this study launched a two-stage simple version of the IGT, namely an AACC and BBDD version, which possesses a balanced gain-loss structure between advantageous and disadvantageous decks and facilitates monitoring of participant preferences after the first 100 trials. RESULTS: The experimental results suggest that the "prominent deck B" phenomenon exists in the IGT. Moreover, participants cannot suppress their preference for deck B under the uncertain condition, even during the second stage of the game. Although this result is incongruent with the basic assumption of the IGT, an increasing number of studies report similar findings. The results of the AACC and BBDD versions are congruent with the decision-making literature in terms of gain-loss frequency. CONCLUSION: Based on the experimental findings, participants can apply the "gain-stay, loss-shift" strategy to cope with situations involving uncertainty. This investigation found that the largest loss in the IGT did not inspire decision makers to avoid choosing the bad deck B.
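
    To illustrate why a gain-loss-frequency account favors deck B, the sketch below runs a simple "gain-stay, loss-shift" chooser on the commonly cited standard IGT payoff schedule (an assumption used here for illustration, not the paper's AACC/BBDD two-stage task): the low loss-frequency decks B and D collect the most picks even though B is net-losing.

    ```python
    # Minimal sketch of a "gain-stay, loss-shift" chooser on an IGT-like payoff
    # schedule. The schedule below is the commonly cited standard one (A/B: +100
    # per pick but net-losing; C/D: +50 per pick and net-winning; B and D have
    # infrequent but larger losses) and is an illustrative assumption.
    import random
    from collections import Counter

    random.seed(1)
    DECKS = {                      # (gain, loss probability, loss size)
        "A": (100, 0.5, 250),
        "B": (100, 0.1, 1250),
        "C": (50, 0.5, 50),
        "D": (50, 0.1, 250),
    }

    def play(trials=200):
        counts, deck = Counter(), random.choice(list(DECKS))
        for _ in range(trials):
            counts[deck] += 1
            gain, p_loss, loss = DECKS[deck]
            net = gain - (loss if random.random() < p_loss else 0)
            if net < 0:                        # loss-shift: move to another deck
                deck = random.choice([d for d in DECKS if d != deck])
            # gain-stay: otherwise keep choosing the same deck
        return counts

    picks = sum((play() for _ in range(500)), Counter())
    print(picks.most_common())     # low loss-frequency decks B and D dominate
    ```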

    Attribute-Based Encryption for Circuits of Unbounded Depth from Lattices: Garbled Circuits of Optimal Size, Laconic Functional Evaluation, and More

    Although we have known about fully homomorphic encryption (FHE) from circular security assumptions for over a decade [Gentry, STOC ’09; Brakerski–Vaikuntanathan, FOCS ’11], there is still a significant gap in understanding related homomorphic primitives supporting all *unrestricted* polynomial-size computations. One prominent example is attribute-based encryption (ABE). The state-of-the-art constructions, relying on the hardness of learning with errors (LWE) [Gorbunov–Vaikuntanathan–Wee, STOC ’13; Boneh et al., Eurocrypt ’14], only accommodate circuits up to a *predetermined* depth, akin to leveled homomorphic encryption. In addition, their components (master public key, secret keys, and ciphertexts) have sizes polynomial in the maximum circuit depth. Even in the simpler setting where a single key is published (or a single circuit is involved), the depth dependency persists, showing up in constructions of 1-key ABE and related primitives, including laconic function evaluation (LFE), 1-key functional encryption (FE), and reusable garbling schemes. So far, the only approach to eliminating the depth dependency relies on indistinguishability obfuscation. An interesting question that has remained open for over a decade is whether the circular security assumptions enabling FHE can similarly benefit ABE. In this work, we introduce new lattice-based techniques to overcome the depth-dependency limitations:
    - Relying on a circular security assumption, we construct LFE, 1-key FE, 1-key ABE, and reusable garbling schemes capable of evaluating circuits of unbounded depth and size.
    - Based on the *evasive circular* LWE assumption, a stronger variant of the recently proposed *evasive* LWE assumption [Wee, Eurocrypt ’22; Tsabary, Crypto ’22], we construct a full-fledged ABE scheme for circuits of unbounded depth and size.
    Our LFE, 1-key FE, and reusable garbling schemes achieve optimal succinctness (up to polynomial factors in the security parameter). Their ciphertexts and input encodings have sizes linear in the input length, while function digests, secret keys, and garbled circuits have constant sizes independent of circuit parameters (for Boolean outputs). In fact, this gives the first constant-size garbled circuits without relying on indistinguishability obfuscation. Our ABE schemes offer short components, with master public key and ciphertext sizes linear in the attribute length and constant-size secret keys.
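
    The succinctness claims for the LFE, 1-key FE, and reusable garbling schemes can be summarized roughly as follows (λ is the security parameter, x the input, C the evaluated circuit with Boolean output); this is only a restatement of the bounds quoted above, not additional detail from the paper.

    ```latex
    % Succinctness as stated above (\lambda = security parameter, x = input,
    % C = evaluated circuit with Boolean output); LFE / 1-key FE / garbling case.
    \begin{align*}
      |\mathsf{ct}|,\; |\hat{x}| &= |x|\cdot \mathrm{poly}(\lambda)
          && \text{ciphertexts and input encodings: linear in the input length,}\\
      |\mathsf{digest}_C|,\; |\mathsf{sk}_C|,\; |\hat{C}| &= \mathrm{poly}(\lambda)
          && \text{independent of the depth and size of } C.
    \end{align*}
    ```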

    Aorta Fluorescence Imaging by Using Confocal Microscopy

    Activated leukocytes attacked the vascular endothelium, and the associated increase in VE-cadherin number was observed in experiments. A confocal microscopic system with a prism-based wavelength filter was used for multiwavelength fluorescence measurement. Multiwavelength fluorescence imaging of VE-cadherin within an aorta segment of a rat was achieved. The confocal microscopic system, capable of fluorescence detection in cardiovascular tissue, is a useful tool for measuring biological properties in clinical applications.

    Group Signatures and Accountable Ring Signatures from Isogeny-based Assumptions

    Group signatures are an important cryptographic primitive providing both anonymity and accountability to signatures. Accountable ring signatures combine features of both ring signatures and group signatures, and can be directly transformed into group signatures. While there exists extensive work on constructing group signatures from various post-quantum assumptions, none of it uses isogeny-based assumptions. In this work, we propose the first construction of isogeny-based group signatures, obtained directly from our isogeny-based accountable ring signature. This is also the first construction of accountable ring signatures based on post-quantum assumptions. Our schemes are based on the decisional CSIDH assumption (D-CSIDH) and are proven secure in the random oracle model (ROM).
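
    For orientation, D-CSIDH is usually presented as the group-action analogue of the decisional Diffie-Hellman assumption for the CSIDH class-group action on supersingular curves; a rough statement is sketched below, and the paper's exact formalization may differ in its details.

    ```latex
    % Rough shape of the decisional CSIDH assumption (group-action analogue of DDH
    % over the class-group action on supersingular curves); illustrative only.
    \[
      \bigl(E_0,\ \mathfrak{a}\star E_0,\ \mathfrak{b}\star E_0,\ \mathfrak{a}\mathfrak{b}\star E_0\bigr)
      \;\approx_c\;
      \bigl(E_0,\ \mathfrak{a}\star E_0,\ \mathfrak{b}\star E_0,\ \mathfrak{c}\star E_0\bigr),
      \qquad \mathfrak{a},\mathfrak{b},\mathfrak{c}\leftarrow \mathrm{cl}(\mathcal{O}).
    \]
    ```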

    Patient-oriented simulation based on Monte Carlo algorithm by using MRI data

    BACKGROUND: Although Monte Carlo simulations of light propagation in fully segmented three-dimensional MRI-based anatomical models of the human head have been reported in many articles, to our knowledge there is no patient-oriented simulation for individualized calibration of NIRS measurements. We therefore offer an approach to brain modeling, based on an image segmentation process applied to in vivo three-dimensional MRI T1 images, to investigate individualized calibration for NIRS measurement with Monte Carlo simulation. METHODS: In this study, an individualized brain is modeled as a five-layer structure based on an in vivo 3D MRI image. The behavior of photon migration in this individualized brain model was studied with a three-dimensional time-resolved Monte Carlo algorithm. During the Monte Carlo iterations, all photon paths were traced for various source-detector separations to characterize the brain structure and provide helpful information for the individualized design of a NIRS system. RESULTS: Our results indicate that the patient-oriented simulation identifies an optimal source-detector separation within 3.3 cm for the individualized design in this case. Significant distortions were observed around the cerebral cortex folding. The spatial sensitivity profile penetrated deeper into the brain in the case of expanded CSF. This finding suggests that the optical method may provide not only functional signals from brain activation but also structural information about brain atrophy with an expanded CSF layer. The proposed modeling method also supports multi-wavelength NIRS simulation to approach practical NIRS measurement. CONCLUSIONS: In this study, the three-dimensional time-resolved brain modeling method approximates the realistic human brain and provides useful information for NIRS system design and calibration in individualized cases with prior MRI data.
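
    As a rough illustration of the photon-path tracing described above, the sketch below runs a heavily simplified Monte Carlo photon-migration model in a single homogeneous semi-infinite medium with isotropic scattering and no refractive-index mismatch; the optical coefficients and geometry are placeholder assumptions, not the paper's five-layer MRI-based head model.

    ```python
    # Heavily simplified Monte Carlo photon-migration sketch: photons are launched
    # into a homogeneous semi-infinite medium, scattered isotropically, and their
    # exit radius at the surface is recorded, showing how detected signal falls off
    # with source-detector separation. Coefficients are placeholder values.
    import numpy as np

    rng = np.random.default_rng(0)
    mu_a, mu_s = 0.02, 1.0            # absorption / reduced scattering [1/mm], assumed
    mu_t, albedo = mu_a + mu_s, mu_s / (mu_a + mu_s)

    def launch():
        """Trace one photon; return its exit radius [mm] or None if absorbed."""
        pos = np.zeros(3)
        direc = np.array([0.0, 0.0, 1.0])        # enter pointing into the tissue
        while True:
            step = -np.log(rng.random()) / mu_t  # free path length
            pos = pos + step * direc
            if pos[2] < 0.0:                     # re-emerged at the surface
                return np.hypot(pos[0], pos[1])
            if rng.random() > albedo:            # absorbed inside the medium
                return None
            # isotropic scattering: new direction uniform on the unit sphere
            cos_t = 2 * rng.random() - 1
            phi = 2 * np.pi * rng.random()
            sin_t = np.sqrt(1 - cos_t**2)
            direc = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])

    radii = [r for r in (launch() for _ in range(3000)) if r is not None]
    hist, edges = np.histogram(radii, bins=[0, 5, 10, 15, 20, 30])
    for n, (a, b) in zip(hist, zip(edges[:-1], edges[1:])):
        print(f"detected at {a:>2.0f}-{b:>2.0f} mm separation: {n}")
    ```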